Collaborative offloading strategy in Internet of Vehicles based on asynchronous deep reinforcement learning
Xiaoyan ZHAO, Wei HAN, Junna ZHANG, Peiyan YUAN
Journal of Computer Applications    2024, 44 (5): 1501-1510.   DOI: 10.11772/j.issn.1001-9081.2023050788

With the rapid development of Internet of Vehicles (IoV), smart connected vehicles generate a large number of latency-sensitive and computation-intensive tasks, and limited vehicle computing resources and traditional cloud service modes cannot meet the needs of in-vehicle users. Mobile Edge Computing (MEC) provides an effective paradigm for offloading tasks with massive data. However, in multi-task and multi-user scenarios, task offloading in IoV is highly complex because vehicle locations, task types and vehicle density change dynamically in real time, and the offloading process is prone to problems such as unbalanced edge resource allocation, excessive communication overhead and slow algorithm convergence. To address these problems, a cooperative task offloading strategy for multiple edge servers in multi-task, multi-user mobile IoV scenarios was studied. First, a three-layer heterogeneous network model for multi-edge collaborative processing was proposed, and dynamic collaborative clusters were introduced to cope with the changing IoV environment, transforming the offloading problem into a joint optimization problem of delay and energy consumption. Then, the problem was divided into two subproblems, offloading decision and resource allocation; the resource allocation subproblem was further split into the allocation of edge server resources and of transmission bandwidth, and both subproblems were solved based on convex optimization theory. To find the optimal set of offloading decisions, a Multi-edge Collaborative Deep Deterministic Policy Gradient (MC-DDPG) algorithm that can handle continuous problems within collaborative clusters was proposed, and an Asynchronous MC-DDPG (AMC-DDPG) algorithm was designed on top of it: the training parameters of each collaborative cluster were asynchronously uploaded to the cloud for a global update, and the updated results were returned to each cluster to improve convergence speed. Simulation results show that the AMC-DDPG algorithm improves convergence speed by at least 30% over the DDPG algorithm and achieves better results in terms of reward and total cost.
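
The asynchronous global update is the key mechanism behind the claimed convergence speed-up. As a rough illustration only, the following Python sketch shows one way per-cluster training parameters could be pushed to a cloud aggregator and blended back, assuming simple soft parameter mixing; the class and method names (CloudAggregator, push_and_pull) and the mixing rule are hypothetical, not the paper's exact scheme.

```python
import threading
import numpy as np

class CloudAggregator:
    """Hypothetical cloud-side global update: collaborative clusters push
    their local DDPG parameters asynchronously; the cloud blends each push
    into a global copy and hands the result back (soft update, rate alpha)."""
    def __init__(self, init_params, alpha=0.5):
        self._params = {k: v.copy() for k, v in init_params.items()}
        self._lock = threading.Lock()   # pushes arrive asynchronously
        self._alpha = alpha

    def push_and_pull(self, local_params):
        with self._lock:
            for k, v in local_params.items():
                # global <- (1 - alpha) * global + alpha * local
                self._params[k] = (1 - self._alpha) * self._params[k] + self._alpha * v
            return {k: v.copy() for k, v in self._params.items()}

# Each cluster would call push_and_pull() after a few local MC-DDPG
# training steps instead of waiting at a global synchronization barrier,
# which is where an asynchronous scheme gains convergence speed.
```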

Task offloading method based on dynamic service cache assistance
Junna ZHANG, Xinxin WANG, Tianze LI, Xiaoyan ZHAO, Peiyan YUAN
Journal of Computer Applications    2024, 44 (5): 1493-1500.   DOI: 10.11772/j.issn.1001-9081.2023050831

To address the degradation of user experience quality caused by joint optimizations of service caching and task offloading that do not fully consider the diversity and dynamics of user service requests, a task offloading method based on dynamic service cache assistance was proposed. Firstly, to deal with the large action space faced by edge servers when caching services, the actions were redefined and an optimal subset of actions was selected to improve the training efficiency of the algorithm. Secondly, an improved multi-agent Q-Learning algorithm was designed to learn the optimal service caching policy. Thirdly, the task offloading problem was converted into a convex optimization problem, and the optimal solution was obtained with a convex optimization tool. Finally, the optimal computational resource allocation policy was derived using the Lagrangian dual method. To verify the effectiveness of the proposed method, extensive experiments were conducted on a real dataset. Experimental results show that, compared with the Q-Learning, Double Deep Q Network (D2QN) and Multi-Agent Deep Deterministic Policy Gradient (MADDPG) methods, the proposed method reduces response time by 8.5%, 11.8% and 12.6%, respectively, and improves average quality of experience by 1.5%, 2.7% and 4.3%, respectively.
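
For intuition on the caching stage, here is a minimal tabular sketch of a per-edge-server Q-Learning agent operating over a pruned action set, in the spirit of the redefined actions described above; the CachingAgent class, its reward shape, and all parameter values are illustrative assumptions rather than the authors' formulation.

```python
import random
from collections import defaultdict

class CachingAgent:
    """One edge server's Q-Learning caching agent. Assumes the action
    space has already been pruned to a small set of candidate cache
    placements (the 'optimal set of actions' in the abstract)."""
    def __init__(self, actions, lr=0.1, gamma=0.9, eps=0.1):
        self.q = defaultdict(float)     # Q[(state, action)] table
        self.actions = actions          # pruned caching actions
        self.lr, self.gamma, self.eps = lr, gamma, eps

    def act(self, state):
        if random.random() < self.eps:                    # explore
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def learn(self, s, a, reward, s_next):
        best_next = max(self.q[(s_next, a2)] for a2 in self.actions)
        # standard Q-Learning temporal-difference update
        self.q[(s, a)] += self.lr * (reward + self.gamma * best_next
                                     - self.q[(s, a)])
```

In a multi-agent setting, one such agent would run per edge server, with the reward reflecting cache-hit-driven gains in response time and quality of experience.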

Device-to-device content sharing mechanism based on knowledge graph
Xiaoyan ZHAO, Yan KUANG, Menghan WANG, Peiyan YUAN
Journal of Computer Applications    2024, 44 (4): 995-1001.   DOI: 10.11772/j.issn.1001-9081.2023040500

Device-to-Device (D2D) communication leverages the local computing and caching capabilities of the edge network to meet future mobile users' demand for low-latency, energy-efficient content sharing. The content sharing efficiency of edge networks depends not only on user social relationships but also, heavily, on the characteristics of end devices, such as computing, storage, and residual energy resources. Therefore, a D2D content sharing mechanism that maximizes energy efficiency using multi-dimensional user-device-content association features was proposed, taking device heterogeneity, user sociality, and interest differences into account. Firstly, the multi-objective constrained problem of maximizing user cost-benefit was transformed into an optimal node selection and power control problem, and the multi-dimensional knowledge association features and the user-device-content graph model were constructed by structurally processing multi-dimensional device-related features such as computing and storage resources. Then, methods for measuring user willingness over device attributes and social attributes were studied, and a sharing willingness measurement method based on user sociality and device graphs was proposed. Finally, according to user sharing willingness, a D2D collaboration cluster oriented to content sharing was constructed, and an energy-efficiency-oriented power control algorithm based on sharing willingness was designed to maximize network sharing performance. Experimental results on a real user device dataset and the infocom06 dataset show that, compared with the nearest-neighbor selection algorithm and a selection algorithm that ignores device willingness, the proposed power control algorithm based on sharing willingness improves the system sum rate by about 97.2% and 11.1%, increases user satisfaction by about 72.7% and 4.3%, and improves energy efficiency by about 57.8% and 9.7%, respectively, verifying the effectiveness of the proposed algorithm in terms of transmission rate, energy efficiency and user satisfaction.
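
To make the willingness-driven power control concrete, the sketch below scores each candidate sharing node by willingness-weighted energy efficiency (Shannon rate per watt) and searches a discrete set of power levels; the function names, the weighting scheme, and all constants are assumptions for illustration, not the paper's actual optimization.

```python
import math

def energy_efficiency(p_tx, gain, bandwidth=1e6, noise=1e-9, p_circuit=0.1):
    """Shannon-rate-based energy efficiency (bit/J) of one D2D link;
    the circuit-power term and numeric values are illustrative."""
    rate = bandwidth * math.log2(1 + p_tx * gain / noise)
    return rate / (p_tx + p_circuit)

def willingness_power_control(candidates, p_levels):
    """Pick the sharing node and transmit power that maximize
    willingness-weighted energy efficiency. `candidates` maps a node id
    to (willingness in [0, 1], channel gain); the discrete power search
    is a sketch of the node selection / power control step."""
    best = None
    for node, (w, gain) in candidates.items():
        for p in p_levels:
            score = w * energy_efficiency(p, gain)
            if best is None or score > best[0]:
                best = (score, node, p)
    return best  # (score, chosen node, transmit power)
```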

Lightweight distributed social distance routing algorithm in opportunistic networks
Peiyan YUAN, Ming-Yang SONG
  
Accepted: 03 August 2017